About the Project

This is Intro to Open Data Science: a course for everyone and a course for anywhere. The course runs for 7 weeks (Tue. 16:00-18:00) and consists of multiple online projects and assignments.

My Repository


IODS Week 2

This week's assignments included data wrangling and analysis!

Brief Description of Data

The data in this table consists of student responses to questions regarding three types of learning approaches (deep, surface, and strategic). The dataset also includes personal characteristics such as age and gender. Students were asked several questions (on a 1-5 scale) about each type of learning, and the average of these responses is presented in the table.
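The wrangling that produces these combination scores is not shown here; below is a minimal sketch of the idea, where the data frame name (learning) and the question column names are hypothetical placeholders rather than the actual survey items.

# hypothetical sketch: average groups of 1-5 Likert items into one score per approach
deep_questions <- c("D1", "D2", "D3")    # placeholder item names
surf_questions <- c("SU1", "SU2", "SU3") # placeholder item names
stra_questions <- c("ST1", "ST2")        # placeholder item names
learning$deep <- rowMeans(learning[, deep_questions])
learning$surf <- rowMeans(learning[, surf_questions])
learning$stra <- rowMeans(learning[, stra_questions])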

Summary of Results

For this data set, 66% of respondents were female and 34% were male. Their ages ranged from 17 to 55 years (mean 25.5). Points ranged from 7 to 33 (mean 22.7). The average scale ratings of “attitude”, “deep”, “stra”, and “surf” were 3.1, 3.6, 3.1, and 2.7, respectively.

Code

print(students2014)
summary(students2014)

Regression Models

Code

library(ggplot2)
qplot(attitude, points, data = students2014) + geom_smooth(method = "lm")

# fit a linear model

points_v_attitude <- lm(points ~ attitude, data = students2014)
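The p-value and R-squared quoted in the interpretation below come from the model summary, which is not echoed above; printing it is a one-liner:

# coefficient estimates, p-values, and the multiple R-squared for the fitted model
summary(points_v_attitude)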

Interpretation

There is a statistically significant positive association between attitude and points (p = 4.13e-09). The multiple R-squared value was 0.1906, meaning attitude explains only about 19% of the variance in points, so the relationship, while significant, is relatively weak.

Diagnostic Plots

my_model2 <- lm(points ~ attitude + stra, data = students2014)

par(mfrow = c(2,2))
plot(my_model2, which = c(1,2,5))

These diagnostic plots check key assumptions of the model, such as normality of the residuals, linearity, and the influence of outliers (leverage). The residuals appear to be approximately normally distributed, with a few outliers that could be affecting the linear regression model.


IODS Week 3

The Data

This data evaluates student achievement in secondary education at two Portuguese schools. The attributes include student grades and demographic, social, and school-related features, and the data was collected using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese language (por).

For this analysis, the two data sets were combined, and only students whose responses appear in BOTH data sets were included.
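The join itself is not shown above; here is a sketch of the idea, where mat and por stand for the two raw data sets and the identifier columns in join_by are an assumption based on the non-suffixed columns visible in the output further below.

library(dplyr)

# hypothetical identifier columns used to match the same student in both files
join_by <- c("school", "sex", "age", "address", "famsize", "Pstatus",
             "Medu", "Fedu", "Mjob", "Fjob", "reason", "nursery", "internet")

# keep only students present in BOTH data sets
mat_por <- inner_join(mat, por, by = join_by, suffix = c(".math", ".por"))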

Description of Variables

For a full description of the variables, Click Here

alc_use is the average of ‘Dalc’ and ‘Walc’. high_use is TRUE if ‘alc_use’ is higher than 2 and FALSE otherwise.
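In code, those two variables could be created like this (a sketch, assuming the combined data frame is called alc as in the rest of this section):

library(dplyr)

# average of workday (Dalc) and weekend (Walc) alcohol consumption
alc <- mutate(alc, alc_use = (Dalc + Walc) / 2)
# flag students whose average consumption is above 2
alc <- mutate(alc, high_use = alc_use > 2)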

The focus of this analysis is on the relationships between high/low alcohol consumption and selected other variables in the data.

My Hypotheses

1) Parental education affects alcohol use

2) Study time affects alcohol use

3) Access to the internet affects alcohol use

4) Address (urban vs. rural) affects alcohol use

Analysis

alc <- read.csv(file = "alc.csv", header = TRUE, sep = ",")

Looking at Various Plots

## Observations: 382
## Variables: 36
## $ X          <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, ...
## $ school     <fct> GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP, GP,...
## $ sex        <fct> F, F, F, F, F, M, M, F, M, M, F, F, M, M, M, F, F, ...
## $ age        <int> 18, 17, 15, 15, 16, 16, 16, 17, 15, 15, 15, 15, 15,...
## $ address    <fct> U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, U, ...
## $ famsize    <fct> GT3, GT3, LE3, GT3, GT3, LE3, LE3, GT3, LE3, GT3, G...
## $ Pstatus    <fct> A, T, T, T, T, T, T, A, A, T, T, T, T, T, A, T, T, ...
## $ Medu       <int> 4, 1, 1, 4, 3, 4, 2, 4, 3, 3, 4, 2, 4, 4, 2, 4, 4, ...
## $ Fedu       <int> 4, 1, 1, 2, 3, 3, 2, 4, 2, 4, 4, 1, 4, 3, 2, 4, 4, ...
## $ Mjob       <fct> at_home, at_home, at_home, health, other, services,...
## $ Fjob       <fct> teacher, other, other, services, other, other, othe...
## $ reason     <fct> course, course, other, home, home, reputation, home...
## $ nursery    <fct> yes, no, yes, yes, yes, yes, yes, yes, yes, yes, ye...
## $ internet   <fct> no, yes, yes, yes, no, yes, yes, no, yes, yes, yes,...
## $ guardian   <fct> mother, father, mother, mother, father, mother, mot...
## $ traveltime <int> 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, ...
## $ studytime  <int> 2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 3, 1, 2, 3, 1, 3, ...
## $ failures   <int> 0, 0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ schoolsup  <fct> yes, no, yes, no, no, no, no, yes, no, no, no, no, ...
## $ famsup     <fct> no, yes, no, yes, yes, yes, no, yes, yes, yes, yes,...
## $ paid       <fct> no, no, yes, yes, yes, yes, no, no, yes, yes, yes, ...
## $ activities <fct> no, no, no, yes, no, yes, no, no, no, yes, no, yes,...
## $ higher     <fct> yes, yes, yes, yes, yes, yes, yes, yes, yes, yes, y...
## $ romantic   <fct> no, no, no, yes, no, no, no, no, no, no, no, no, no...
## $ famrel     <int> 4, 5, 4, 3, 4, 5, 4, 4, 4, 5, 3, 5, 4, 5, 4, 4, 3, ...
## $ freetime   <int> 3, 3, 3, 2, 3, 4, 4, 1, 2, 5, 3, 2, 3, 4, 5, 4, 2, ...
## $ goout      <int> 4, 3, 2, 2, 2, 2, 4, 4, 2, 1, 3, 2, 3, 3, 2, 4, 3, ...
## $ Dalc       <int> 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ Walc       <int> 1, 1, 3, 1, 2, 2, 1, 1, 1, 1, 2, 1, 3, 2, 1, 2, 2, ...
## $ health     <int> 3, 3, 3, 5, 5, 5, 3, 1, 1, 5, 2, 4, 5, 3, 3, 2, 2, ...
## $ absences   <int> 5, 3, 8, 1, 2, 8, 0, 4, 0, 0, 1, 2, 1, 1, 0, 5, 8, ...
## $ G1         <int> 2, 7, 10, 14, 8, 14, 12, 8, 16, 13, 12, 10, 13, 11,...
## $ G2         <int> 8, 8, 10, 14, 12, 14, 12, 9, 17, 14, 11, 12, 14, 11...
## $ G3         <int> 8, 8, 11, 14, 12, 14, 12, 10, 18, 14, 12, 12, 13, 1...
## $ alc_use    <dbl> 1.0, 1.0, 2.5, 1.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1...
## $ high_use   <lgl> FALSE, FALSE, TRUE, FALSE, FALSE, FALSE, FALSE, FAL...

## # A tibble: 4 x 4
## # Groups:   sex [?]
##   sex   high_use count mean_grade
##   <fct> <lgl>    <int>      <dbl>
## 1 F     FALSE      156       11.4
## 2 F     TRUE        42       11.7
## 3 M     FALSE      112       12.2
## 4 M     TRUE        72       10.3
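This grouped summary looks like the output of a dplyr pipeline; a sketch of one that would produce it, assuming the final grade G3 is the grade being averaged:

library(dplyr)

# count students and compute the mean final grade by sex and high_use
alc %>%
  group_by(sex, high_use) %>%
  summarise(count = n(), mean_grade = mean(G3))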

Including Plots


Analysis: there appears to be a greater proportion of “high use” individuals among those who live in rural areas compared to urban areas.

Analysis: there appears to be a greater proportion of “high use” individuals among those who studied less than 2 hours per week.
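The bar plots themselves did not make it into this page; here is a sketch of charts that would show those proportions, using ggplot2 (loaded earlier):

library(ggplot2)

# proportion of high_use within urban (U) vs. rural (R) addresses
ggplot(alc, aes(x = address, fill = high_use)) +
  geom_bar(position = "fill") +
  ylab("proportion")

# proportion of high_use within each study-time category
ggplot(alc, aes(x = factor(studytime), fill = high_use)) +
  geom_bar(position = "fill") +
  ylab("proportion")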

Logistic Regression

m <- glm(high_use ~ address + absences + studytime + failures, data = alc, family = "binomial")

Summary of the Model

# print out a summary of the model
m <- glm(high_use ~ address + absences + studytime + failures, data = alc, family = "binomial")
summary(m)
## 
## Call:
## glm(formula = high_use ~ address + absences + studytime + failures, 
##     family = "binomial", data = alc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.3106  -0.8274  -0.6498   1.1066   2.2127  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)  0.03894    0.42264   0.092 0.926596    
## addressU    -0.49810    0.27809  -1.791 0.073268 .  
## absences     0.07883    0.02288   3.446 0.000569 ***
## studytime   -0.49431    0.15835  -3.122 0.001798 ** 
## failures     0.35312    0.19100   1.849 0.064487 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 465.68  on 381  degrees of freedom
## Residual deviance: 426.83  on 377  degrees of freedom
## AIC: 436.83
## 
## Number of Fisher Scoring iterations: 4


Coefficients of the Model

##                    OR     2.5 %    97.5 %
## (Intercept) 1.0397050 0.4536726 2.3898611
## addressU    0.6076839 0.3532793 1.0542771
## absences    1.0820164 1.0367785 1.1342085
## studytime   0.6099923 0.4431742 0.8256399
## failures    1.4235086 0.9792692 2.0818432

An odds ratio (OR) is a measure of association between an exposure and an outcome: it represents the odds that the outcome will occur given a particular exposure, compared to the odds of the outcome occurring in the absence of that exposure. In this model, each additional absence multiplies the odds of high alcohol use by about 1.08, while each one-step increase in study time multiplies them by about 0.61; the intervals for address and failures include 1, so their effects are less certain.
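The table above can be reproduced by exponentiating the model coefficients and their confidence intervals; a minimal sketch:

# odds ratios and their 95% confidence intervals on the odds scale
OR <- exp(coef(m))
CI <- exp(confint(m))
cbind(OR, CI)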

Cross Tabulation

##         prediction
## high_use FALSE TRUE
##    FALSE   253   15
##    TRUE     89   25

##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.66230366 0.03926702 0.70157068
##    TRUE  0.23298429 0.06544503 0.29842932
##    Sum   0.89528796 0.10471204 1.00000000
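These tables compare the observed high_use values with predictions from the model; a sketch of how they could be produced, where probability and prediction are hypothetical helper columns and 0.5 is assumed as the classification threshold:

library(dplyr)

# predicted probability of high_use for each student
alc <- mutate(alc, probability = predict(m, type = "response"))
# classify as high_use when the predicted probability exceeds 0.5
alc <- mutate(alc, prediction = probability > 0.5)

# counts of observed vs. predicted classes
table(high_use = alc$high_use, prediction = alc$prediction)
# the same table as proportions, with margins
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table %>% addmargins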

Inaccurately Classified Individuals

## [1] 0.2722513

The training error rate here is 27.2%, i.e., (89 + 15) / 382 of the individuals in the training data were classified incorrectly.
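A sketch of computing that proportion with a simple loss function, reusing the probability column from the previous sketch:

# proportion of observations where the predicted class differs from the true class
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
loss_func(class = alc$high_use, prob = alc$probability)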

Bonus: 10-Fold Cross Validation

## [1] 0.2801047

Here the cross-validation error rate is 28%, only slightly higher than the training error rate.
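A sketch of the 10-fold cross-validation with cv.glm() from the boot package, reusing the loss function above as the cost:

library(boot)

# 10-fold cross-validation of the logistic regression model m
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]  # average prediction error across the folds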


IODS Week 4: Clustering and Classification

About the Data

This week, I investigated the Boston dataset, which contains data on housing values in the suburbs of Boston. The structure and dimensions of the dataset are:

library(MASS)  # the Boston dataset comes with the MASS package
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14

All variables are further explained HERE.

Graphing the Data

pairs(Boston, lower.panel = NULL)

This representation is difficult to interpret, so next, I will use a correlation matrix to more easily visualize the data.

library(corrplot)  # provides corrplot()
library(dplyr)     # provides the %>% pipe
cor_matrix <- cor(Boston) %>% round(digits = 2)
corrplot(cor_matrix, method = "circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)

Here, it is much easier to see the correlations. According to this matrix, rad (index of accessibility to radial highways) and tax (full-value property-tax rate per $10,000) have the highest positive correlation.

Standardizing the Data

After scaling, the summary of the data is now available:
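The scaling call itself is not echoed here; it is the standard scale() step (the same call is used again later in this section when the data is re-loaded):

# center each variable to mean 0 and scale it to unit standard deviation
boston_scaled <- scale(Boston)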

summary(boston_scaled)
##       crim                 zn               indus        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202  
##       chas              nox                rm               age         
##  Min.   :-0.2723   Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331  
##  1st Qu.:-0.2723   1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366  
##  Median :-0.2723   Median :-0.1441   Median :-0.1084   Median : 0.3171  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.:-0.2723   3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059  
##  Max.   : 3.6648   Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164  
##       dis               rad               tax             ptratio       
##  Min.   :-1.2658   Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047  
##  1st Qu.:-0.8049   1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876  
##  Median :-0.2790   Median :-0.5225   Median :-0.4642   Median : 0.2746  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6617   3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058  
##  Max.   : 3.9566   Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372  
##      black             lstat              medv        
##  Min.   :-3.9033   Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.: 0.2049   1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median : 0.3808   Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4332   3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 0.4406   Max.   : 3.5453   Max.   : 2.9865

Now each variable is standardized: mean 0 and standard deviation 1.

Next, I created a categorical variable crime from the crime rate (crim), using its quantiles as break points. Then, I removed the old crime rate variable (crim) from the dataset and added the new variable crime. This was done with the following R code:

boston_scaled <- as.data.frame(boston_scaled)
bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)

The final step before performing the Linear Discriminant Analysis (LDA) is to separate the dataset into training (train) and testing (test) sets with a ratio of 4:1.

n <- nrow(boston_scaled)
ind <- sample(n,  size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]

Linear Discriminant Analysis

Next, I used the LDA on the training set train. This involves using the categorical crime rate crime as the target variable and all other variables as predictor variables.

lda.fit <- lda(crime ~ ., data = train)
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)

Here you can see that the high crime rate category forms a tighter, more clearly separated group than the other categories.

Making Predictions

First, I saved the crime categories from the test set and then removed the categorical crime variable from the test dataset.

correct_classes <- test$crime
test <- dplyr::select(test, -crime)
lda.pred <- predict(lda.fit, newdata = test)

Now I can cross tabulate the results with the crime categories from the test set.

table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       17      13        1    0
##   med_low    3      15        4    0
##   med_high   1       8       14    1
##   high       0       0        0   25

From this table, we can see that the predictions are fairly accurate, though accuracy depends on the category. Of the observations predicted as high, 25 were correct and only 1 was not (it was actually med_high). Of those predicted as med_high, however, 14 were correct and 5 were not (4 were actually med_low and 1 was low).

Clusters/Distance Measures

First, the dataset “Boston” has to be re-loaded and re-scaled.

data("Boston")
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)

Using the Euclidean and Manhattan distances, the distances between observations can be measured. Then, a k-means algorithm can be run on the dataset.
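A sketch of computing those distance matrices with dist(); the Euclidean distance is the default method:

# Euclidean distance matrix (the default)
dist_eu <- dist(boston_scaled)
summary(dist_eu)

# Manhattan distance matrix
dist_man <- dist(boston_scaled, method = "manhattan")
summary(dist_man)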

km <- kmeans(boston_scaled, centers = 5)
summary(km)
##              Length Class  Mode   
## cluster      506    -none- numeric
## centers       70    -none- numeric
## totss          1    -none- numeric
## withinss       5    -none- numeric
## tot.withinss   1    -none- numeric
## betweenss      1    -none- numeric
## size           5    -none- numeric
## iter           1    -none- numeric
## ifault         1    -none- numeric

Finally, I can investigate the optimal number of clusters then run the algorithm again.

k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')

It seems that the total WCSS decreases most sharply around 2-3 clusters, so I ran the algorithm again with both 2 and 3 clusters.

km <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled[3:7], col = km$cluster, lower.panel = NULL)

km <- kmeans(boston_scaled, centers = 3)
pairs(boston_scaled[3:7], col = km$cluster, lower.panel = NULL)

Based on visual inspection alone, 2 clusters appears to be the best option.